
Arize Phoenix

AI Observability and Evaluation

Phoenix is an open-source observability tool designed for experimentation, evaluation, and troubleshooting of AI and LLM applications. It allows AI engineers and data scientists to quickly visualize their data, evaluate performance, track down issues, and export data to drive improvements. Phoenix is built by Arize AI, the company behind the industry-leading AI observability platform, and a set of core contributors.

Phoenix works with OpenTelemetry and OpenInference instrumentation. See Integrations: Tracing for details.
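
For example, a minimal tracing setup might look like the following sketch, assuming the arize-phoenix-otel and openinference-instrumentation-openai packages are installed and a Phoenix instance is reachable at its default local address:

    # Minimal sketch: send OpenAI traces to a local Phoenix instance.
    from openinference.instrumentation.openai import OpenAIInstrumentor
    from phoenix.otel import register

    # Register an OpenTelemetry tracer provider that exports spans to Phoenix.
    tracer_provider = register(project_name="my-llm-app")  # hypothetical project name

    # Auto-instrument every OpenAI SDK call using OpenInference semantic conventions.
    OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)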

Features

Phoenix offers tools to streamline your prompt engineering workflow.

  • Prompt Management - Create, store, modify, and deploy prompts for interacting with LLMs

  • Prompt Playground - Play with prompts, models, and invocation parameters, and track your progress via tracing and experiments

  • Span Replay - Replay the invocation of an LLM. Whether it's an LLM step in an LLM workflow or a router query, you can step into the LLM invocation and see if any modifications to the invocation would have yielded a better outcome.

  • Prompts in Code - Phoenix offers client SDKs to keep your prompts in sync across different applications and environments (see the sketch after this list).
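
Pulling a stored prompt with the Python client works roughly as in the following sketch; the prompt name and variables are hypothetical, and the snippet assumes the arize-phoenix-client and openai packages plus a running Phoenix server:

    from openai import OpenAI
    from phoenix.client import Client

    # Fetch the latest version of a prompt stored in Phoenix ("article-summarizer" is hypothetical).
    prompt = Client().prompts.get(prompt_identifier="article-summarizer")

    # Render the stored prompt template with variables and pass the result to the model provider.
    kwargs = prompt.format(variables={"article": "Phoenix is an open-source observability tool..."})
    response = OpenAI().chat.completions.create(**kwargs)
    print(response.choices[0].message.content)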

Quickstarts

Running Phoenix for the first time? Select a quickstart: Tracing, Prompt Playground, Datasets and Experiments, Evaluation, or Inferences.

Next Steps

Try our Tutorials

Check out a comprehensive list of example notebooks for LLM Traces, Evals, RAG Analysis, and more.

Add Integrations

Add instrumentation for popular packages and libraries such as OpenAI, LangGraph, Vercel AI SDK and more.

Community

Join the Phoenix Slack community to ask questions, share findings, provide feedback, and connect with other developers.

Tracing

Tracing is a helpful tool for understanding how your LLM application works. Phoenix's open-source library offers comprehensive tracing capabilities that are not tied to any specific LLM vendor or framework.

Phoenix accepts traces over the OpenTelemetry protocol (OTLP) and supports first-class instrumentation for a variety of frameworks (LlamaIndex, LangChain, DSPy), SDKs (OpenAI, Bedrock, Mistral, Vertex), and languages (Python, JavaScript, etc.).
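
Because Phoenix speaks standard OTLP, you can also point a plain OpenTelemetry SDK at it without any Phoenix-specific packages. A rough sketch, assuming Phoenix is listening on its default local port:

    from opentelemetry import trace
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor

    # Export spans to Phoenix over OTLP/HTTP (default local endpoint assumed).
    provider = TracerProvider()
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:6006/v1/traces"))
    )
    trace.set_tracer_provider(provider)

    # Any span created from here on shows up as a trace in Phoenix.
    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("my-llm-step"):
        pass  # application code whose latency and attributes you want to capture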

Evals

Phoenix is built to help you evaluate your application and understand its true performance. To accomplish this, Phoenix includes:

  • A standalone library to run LLM-based evaluations on your own datasets (sketched after this list). This can be used either with the Phoenix library, or independently over your own data.

  • Direct integration of LLM-based and code-based evaluators into the Phoenix dashboard. Phoenix is built to be agnostic, and so these evals can be generated using Phoenix's library, or an external library like Ragas, Deepeval, or Cleanlab.

  • Human annotation capabilities to attach human ground truth labels to your data in Phoenix.
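
A rough sketch of the standalone library using the pre-built hallucination eval; the sample rows, column names, and model choice are illustrative, and the snippet assumes the arize-phoenix-evals and pandas packages plus an OpenAI API key:

    import pandas as pd
    from phoenix.evals import (
        HALLUCINATION_PROMPT_RAILS_MAP,
        HALLUCINATION_PROMPT_TEMPLATE,
        OpenAIModel,
        llm_classify,
    )

    # Illustrative data: each row pairs a query and reference text with an answer to check.
    df = pd.DataFrame(
        {
            "input": ["Who builds Phoenix?"],
            "reference": ["Phoenix is built by Arize AI and a set of core contributors."],
            "output": ["Phoenix is built by the OpenTelemetry maintainers."],
        }
    )

    # Ask an LLM judge to label each row as factual or hallucinated, with explanations.
    results = llm_classify(
        dataframe=df,
        template=HALLUCINATION_PROMPT_TEMPLATE,
        model=OpenAIModel(model="gpt-4o"),
        rails=list(HALLUCINATION_PROMPT_RAILS_MAP.values()),
        provide_explanation=True,
    )
    print(results[["label", "explanation"]])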

Datasets & Experiments

Phoenix Datasets & Experiments let you test different versions of your application, store relevant traces for evaluation and analysis, and build robust evaluations into your development process.

  • Run Experiments to test and compare different iterations of your application (see the sketch after this list)

  • Collect relevant traces into a Dataset, or directly upload Datasets from code / CSV

  • Run Datasets through Prompt Playground, export them in fine-tuning format, or attach them to an Experiment.
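
A minimal sketch of that loop; the dataset name, sample rows, task stub, and evaluator are hypothetical, and the snippet assumes the arize-phoenix and pandas packages with a Phoenix server running locally:

    import pandas as pd
    import phoenix as px
    from phoenix.experiments import run_experiment

    # Upload a small dataset of question / expected-answer pairs from a dataframe.
    df = pd.DataFrame({"question": ["What is Phoenix?"], "expected": ["An observability tool"]})
    dataset = px.Client().upload_dataset(
        dataset_name="demo-questions",
        dataframe=df,
        input_keys=["question"],
        output_keys=["expected"],
    )

    # The task is the piece of your application under test; a stub stands in here.
    def task(example):
        return "An observability tool"  # replace with a real LLM or application call

    # A simple code-based evaluator comparing the task output against the expected answer.
    def exact_match(output, expected):
        return output == expected["expected"]

    run_experiment(dataset, task, evaluators=[exact_match], experiment_name="baseline")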
